Bias–variance tradeoff

In statistics and machine learning, the '''bias–variance tradeoff''' (or '''dilemma''') is the problem of simultaneously minimizing two sources of error that prevent supervised learning algorithms from generalizing beyond their training set:
* The ''bias'' is error from erroneous assumptions in the learning algorithm. High bias can cause an algorithm to miss the relevant relations between features and target outputs (underfitting).
* The ''variance'' is error from sensitivity to small fluctuations in the training set. High variance can cause overfitting: modeling the random noise in the training data, rather than the intended outputs.
The bias–variance decomposition is a way of analyzing a learning algorithm's expected generalization error with respect to a particular problem as a sum of three terms, the bias, variance, and a quantity called the ''irreducible error'', resulting from noise in the problem itself.
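For squared-error loss the decomposition can be stated explicitly. Writing f(x) for the true function, \hat{f}(x) for the learned predictor, and \sigma^2 for the variance of the observation noise (notation introduced here only for illustration), the expected error at a point x, taken over training sets and noise, is

<math>\operatorname{E}\big[(y - \hat{f}(x))^2\big] = \big(\operatorname{E}[\hat{f}(x)] - f(x)\big)^2 + \operatorname{E}\big[(\hat{f}(x) - \operatorname{E}[\hat{f}(x)])^2\big] + \sigma^2,</math>

where the three terms on the right are the squared bias, the variance, and the irreducible error, respectively.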
This tradeoff applies to all forms of supervised learning: classification, regression (function fitting),〔Bias–variance decomposition, In Encyclopedia of Machine Learning. Eds. Claude Sammut, Geoffrey I. Webb. Springer 2011. pp. 100-101〕 and structured output learning. It has also been invoked to explain the effectiveness of heuristics in human learning.
==Motivation==
The bias–variance tradeoff is a central problem in supervised learning. Ideally, one wants to choose a model that both accurately captures the regularities in its training data and generalizes well to unseen data. Unfortunately, it is typically impossible to do both simultaneously. High-variance learning methods may be able to represent their training set well, but are at risk of overfitting to noisy or unrepresentative training data. In contrast, algorithms with high bias typically produce simpler models that do not tend to overfit, but may ''underfit'' their training data, failing to capture important regularities.
Models with low bias are usually more complex (e.g. higher-order regression polynomials), enabling them to represent the training set more accurately. In the process, however, they may also represent a large noise component in the training set, making their predictions less accurate despite the added complexity. In contrast, models with higher bias tend to be relatively simple (low-order or even linear regression polynomials), but may produce lower-variance predictions when applied beyond the training set.
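As a rough, self-contained illustration of this point (not part of the original article), the following Python sketch fits polynomials of increasing degree to repeated noisy samples of a sine curve. The sample size, degrees, and noise level are arbitrary choices for demonstration: low degrees underfit (high bias), very high degrees track the noise (high variance), and intermediate degrees typically give the smallest test error.

<syntaxhighlight lang="python">
# Illustrative sketch only: polynomial degree vs. bias/variance on synthetic data.
# All data and parameter choices here are assumptions for demonstration.
import numpy as np

rng = np.random.default_rng(0)

def true_fn(x):
    return np.sin(2 * np.pi * x)

def test_error(degree, n_train=30, n_trials=200, noise=0.3):
    """Average squared error against the noise-free target (bias^2 + variance),
    estimated over many independent training sets."""
    x_test = np.linspace(0, 1, 100)
    errors = []
    for _ in range(n_trials):
        x_train = rng.uniform(0, 1, n_train)
        y_train = true_fn(x_train) + rng.normal(0, noise, n_train)
        # Least-squares polynomial fit (np.polyfit may warn about conditioning
        # at very high degree, but still runs).
        coeffs = np.polyfit(x_train, y_train, degree)
        y_pred = np.polyval(coeffs, x_test)
        errors.append(np.mean((y_pred - true_fn(x_test)) ** 2))
    return np.mean(errors)

for degree in (1, 3, 9, 15):
    print(f"degree {degree:2d}: mean test error = {test_error(degree):.3f}")
# Low degrees underfit (high bias); very high degrees overfit (high variance);
# intermediate degrees typically achieve the lowest error.
</syntaxhighlight>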
